Piecewise line-search techniques for constrained minimization by quasi-Newton algorithms

Author

  • Jean Charles Gilbert
Abstract

Defining a consistent technique for maintaining the positive definiteness of the matrices generated by quasi-Newton methods for equality constrained minimization remains a difficult open problem. In this paper, we review and discuss the results obtained with a new technique based on an extension of the Wolfe line-search used in unconstrained optimization. The idea is to follow a piecewise linear path approximating a smooth curve, along which some reduced curvature conditions can be satisfied. This technique can be used for the quasi-Newton versions of the reduced Hessian method and for the SQP algorithm. A new argument based on geometrical considerations is also presented. It shows that in reduced quasi-Newton methods the appropriateness of the vectors used to update the matrices may depend on the type of bases representing the tangent space to the constraint manifold. In particular, it is argued that for orthonormal bases, it is better to use the projection of the change in the gradient of the Lagrangian rather than the change in the reduced gradient. Finally, a strong q-superlinear convergence theorem for the reduced quasi-Newton algorithm is discussed. It shows that if the sequence of iterates converges to a strong solution, it converges one-step q-superlinearly. The conditions needed to achieve this speed of convergence are mild; in particular, no assumptions are made on the generated matrices. This result is, to our knowledge, the first extension to constrained optimization of a similar result proved by Powell in 1976 for unconstrained problems.
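For reference, the curvature condition of the classical Wolfe line-search, which this technique extends to a piecewise linear path, is precisely what guarantees sᵀy > 0 and hence that the BFGS update preserves positive definiteness. The Python sketch below illustrates that unconstrained mechanism only; the function names, tolerances, and bracketing scheme are illustrative choices, not the paper's algorithm.

    import numpy as np

    def wolfe_line_search(f, grad, x, d, c1=1e-4, c2=0.9, max_iter=50):
        # Find a step t along the descent direction d satisfying
        #   Armijo:     f(x + t d) <= f(x) + c1 t g0'd
        #   curvature:  grad(x + t d)'d >= c2 g0'd   (implies y's > 0)
        f0 = f(x)
        g0d = grad(x) @ d                # directional derivative, assumed < 0
        lo, hi, t = 0.0, float("inf"), 1.0
        for _ in range(max_iter):
            if f(x + t * d) > f0 + c1 * t * g0d:
                hi = t                                   # Armijo fails: step too long
            elif grad(x + t * d) @ d < c2 * g0d:
                lo = t                                   # curvature fails: step too short
            else:
                return t                                 # both Wolfe conditions hold
            t = 2.0 * lo if hi == float("inf") else 0.5 * (lo + hi)
        return t

    def bfgs_inverse_update(H, s, y):
        # Standard BFGS update of the inverse Hessian approximation H.
        # It stays positive definite because the Wolfe search ensures y's > 0.
        rho = 1.0 / (y @ s)
        I = np.eye(len(s))
        return (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)

In the constrained setting discussed in the paper, the difficulty is that an analogous condition must be satisfied with reduced quantities along a path that follows the constraint manifold, which is what the piecewise linear path is designed to allow.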

Similar articles

Neural Network Learning Through Optimally Conditioned Quadratically Convergent Methods Requiring NO LINE SEARCH

Neural network learning algorithms based on conjugate gradient techniques and quasi-Newton techniques such as the Broyden, DFP, BFGS, and SSVM algorithms require exact or inexact line searches in order to satisfy their convergence criteria. Line searches are very costly and slow down the learning process. This paper will present new neural network learning algorithms based on Hoshino's weak line se...


Bound constrained quadratic programming via piecewise quadratic functions

We consider the strictly convex quadratic programming problem with bounded variables. A dual problem is derived using Lagrange duality. The dual problem is the minimization of an unconstrained, piecewise quadratic function. It involves a lower bound of λ1, the smallest eigenvalue of a symmetric, positive definite matrix, and is solved by Newton iteration with line search. The paper describes th...
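As a generic illustration of the last step described here, namely the unconstrained minimization of a piecewise quadratic function by Newton iteration with a line search, the following sketch applies a generalized Newton method with Armijo backtracking to a toy piecewise quadratic (a convex quadratic plus one-sided quadratic penalties). The objective, names, and parameters are illustrative assumptions, not the dual function constructed in the paper.

    import numpy as np

    def newton_piecewise_quadratic(Q, c, u, rho=1.0, tol=1e-10, max_iter=50):
        # Minimize q(y) = 0.5 y'Qy - c'y + (rho/2) * sum(max(0, y - u)^2),
        # a C^1 piecewise quadratic, by generalized Newton steps with a
        # backtracking (Armijo) line search. Q must be symmetric positive definite.
        def q(y):
            return 0.5 * y @ Q @ y - c @ y + 0.5 * rho * np.sum(np.maximum(0.0, y - u) ** 2)

        y = np.zeros(len(c))
        for _ in range(max_iter):
            g = Q @ y - c + rho * np.maximum(0.0, y - u)      # gradient (continuous)
            if np.linalg.norm(g) <= tol:
                break
            active = (y > u).astype(float)
            H = Q + rho * np.diag(active)                     # generalized Hessian on this piece
            d = np.linalg.solve(H, -g)                        # Newton direction
            t, q0, slope = 1.0, q(y), g @ d
            while t > 1e-12 and q(y + t * d) > q0 + 1e-4 * t * slope:
                t *= 0.5                                      # Armijo backtracking
            y = y + t * d
        return y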


Proximal Quasi-Newton Methods for Nondifferentiable Convex Optimization

Some global convergence properties of a variable metric algorithm for minimization without exact line searches ... a superlinearly convergent algorithm for minimizing the Moreau-Yosida regularization F. However, this algorithm makes use of the generalized Jacobian of F instead of matrices B_k generated by a quasi-Newton formula. Moreover, the line search is performed on the function F rath...


Modify the linear search formula in the BFGS method to achieve global convergence.



Parallel Lagrange-Newton-Krylov-Schur Algorithms for PDE-Constrained Optimization, Part II: The Lagrange-Newton Solver and Its Application to Optimal Control of Steady Viscous Flows

In this paper we follow up our discussion on algorithms suitable for optimization of systems governed by partial differential equations. In the first part of this paper we proposed a Lagrange-Newton-Krylov-Schur method (LNKS) that uses Krylov iterations to solve the Karush-Kuhn-Tucker system of optimality conditions, but invokes a preconditioner inspired by reduced-space quasi-Newton algorit...
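As a toy illustration of the inner linear algebra described here, the sketch below assembles the symmetric indefinite Karush-Kuhn-Tucker system of one Lagrange-Newton step for an equality-constrained problem and solves it with the Krylov method MINRES from SciPy. The reduced-space quasi-Newton preconditioner that is the actual subject of the LNKS method is omitted, and all names are illustrative.

    import numpy as np
    from scipy.sparse.linalg import minres

    def lagrange_newton_kkt_step(H, A, grad_L, c_val):
        # One (unpreconditioned) Lagrange-Newton step for  min f(x)  s.t.  c(x) = 0:
        #   [ H  A' ] [dx      ]   [ -grad_L ]
        #   [ A  0  ] [dlambda ] = [ -c_val  ]
        # H: Hessian of the Lagrangian (n x n, symmetric), A: constraint Jacobian (m x n),
        # grad_L: gradient of the Lagrangian w.r.t. x, c_val: constraint residual c(x).
        n, m = H.shape[0], A.shape[0]
        K = np.block([[H, A.T], [A, np.zeros((m, m))]])
        rhs = -np.concatenate([grad_L, c_val])
        sol, info = minres(K, rhs)
        if info != 0:
            raise RuntimeError("MINRES did not converge")
        return sol[:n], sol[n:]        # primal step dx, multiplier step dlambda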


Journal:

Volume   Issue

Pages  -

Publication date: 2006